
    Parameter tuning for enhancing inter-subject emotion classification in four classes for VR-EEG predictive analytics

    The following research describes the potential of classifying emotions using a wearable EEG headset while using a virtual environment to stimulate the responses of the users. Current developments in emotion classification have generally relied on a clinical-grade EEG headset with a 2D monitor screen for stimulus evocation, which may introduce additional artifacts or inaccurate readings into the dataset because users are unable to give their full attention to the given stimuli, even though the stimuli presented should have been advantageous in provoking emotional reactions. Furthermore, a clinical-grade EEG headset requires a lengthy setup, with hindrances such as hair blocking the electrodes from collecting the brainwave signals, or electrodes coming loose, requiring additional time to fix. With the lengthy duration of setting up the EEG headset, the user may experience fatigue and become incapable of responding naturally to the emotion being presented by the stimuli. Therefore, this research introduces the use of a wearable low-cost EEG headset with dry electrodes that requires only a trivial amount of time to set up, together with a Virtual Reality (VR) headset for the presentation of the emotional stimuli in an immersive VR environment, paired with earphones to provide the full immersive experience needed for the evocation of the emotions. The 360° video stimuli are designed and stitched together according to the arousal-valence space (AVS) model, with each quadrant having an 80-second stimulus presentation period followed by a 10-second rest period in between quadrants. The EEG dataset is collected through the wearable low-cost EEG headset using four channels located at TP9, TP10, AF7 and AF8. The collected dataset is then fed into machine learning algorithms, namely KNN, SVM and Deep Learning, with the dataset focused on inter-subject test approaches using 10-fold cross-validation. The results show that SVM using Radial Basis Function Kernel 1 achieved the highest accuracy at 85.01%. This suggests that a wearable low-cost EEG headset, with a significantly lower-resolution signal than clinical-grade equipment and only a very limited number of electrodes, is highly promising as an emotion classification BCI tool and may thus open up myriad practical and affordable solutions applicable to the medical, education, military, and entertainment domains.
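    A minimal sketch of the classification step described above, assuming band-power features from the four channels (TP9, TP10, AF7, AF8) have already been extracted; the feature matrix below is synthetic placeholder data, not the study's dataset.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder features: 400 epochs x 20 features (5 bands x 4 channels).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 20))
    y = rng.integers(0, 4, size=400)          # four AVS quadrant labels

    # RBF-kernel SVM evaluated with 10-fold cross-validation, as in the abstract.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    print(cross_val_score(clf, X, y, cv=10).mean())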

    Four-Class Emotion Classification using Electrocardiography (ECG) in Virtual Reality (VR)

    The main objective of this paper is to investigate whether ECG signals can be used to classify emotions based on Russell's four-class circumplex emotion model in a VR environment using SVM classifiers. Electrocardiogram (ECG) signals were collected with a medical-grade wearable heart rate monitor from Empatica (E4 Wristband) and the Empatica Realtime Monitor application during this research. ECG was employed as the tool to capture the test subjects' physiological signals via their heart rate. A preliminary experiment was conducted using the heart rate monitor to obtain ECG signals and a VR headset for subjects to view 360-degree video stimuli. A total of 5 subjects participated in this experiment. Data from the 5 subjects were then processed in R Studio using an SVM classifier. The data were classified into four distinct emotion classes using both inter-subject and intra-subject classification approaches, with inter-subject classification yielding an accuracy of 48% while intra-subject classification ranged from 50% to 74%. These results demonstrate the potential of ECG as a promising sensor modality for four-class emotion classification in virtual reality using wearable technology.
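    The original analysis was performed in R Studio; the sketch below only illustrates the inter- vs. intra-subject evaluation split in Python with scikit-learn, using synthetic stand-ins for the heart-rate-derived features.

    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    X = rng.normal(size=(250, 8))             # placeholder: 250 windows x 8 ECG-derived features
    y = rng.integers(0, 4, size=250)          # four emotion quadrants
    subjects = np.repeat(np.arange(5), 50)    # 5 subjects, 50 windows each

    clf = SVC(kernel="rbf")

    # Inter-subject: train on four subjects, test on the held-out subject.
    inter = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())

    # Intra-subject: cross-validate within each subject's own recordings.
    intra = [cross_val_score(clf, X[subjects == s], y[subjects == s], cv=5).mean()
             for s in range(5)]
    print(inter.mean(), intra)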

    Comparing Eye-Tracking versus EEG Features for Four-Class Emotion Classification in VR Predictive Analytics

    This paper presents a novel emotion recognition approach using electroencephalography (EEG) brainwave signals augmented with eye-tracking data in virtual reality (VR) to classify the 4-quadrant circumplex model of emotions. 360° videos are used as the stimuli to evoke the user's emotions (happy, angry, bored, calm) with a VR headset and a pair of earphones. EEG signals are recorded via a wearable EEG brain-computer interfacing (BCI) device, and pupil diameter is collected from a wearable portable eye-tracker. We extract 5 frequency bands, namely Delta, Theta, Alpha, Beta, and Gamma, from the EEG data, and obtain pupil diameter from the eye-tracker as the chosen eye-related feature for this investigation. A Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel is used as the classifier. The best accuracies based on EEG brainwave signals and pupil diameter are 98.44% and 58.30% respectively.
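    A rough sketch of how the five band-power features could be computed for one EEG channel using Welch's method; the sampling rate and the synthetic signal are assumptions for illustration, not details taken from the paper.

    import numpy as np
    from scipy.signal import welch

    fs = 256                                              # assumed sampling rate (Hz)
    eeg = np.random.default_rng(2).normal(size=fs * 10)   # 10 s of synthetic single-channel EEG

    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
             "beta": (13, 30), "gamma": (30, 45)}

    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    band_power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()   # crude band-power estimate
                  for name, (lo, hi) in bands.items()}
    print(band_power)   # these values, plus pupil diameter, would form the feature vector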

    Pushing the boundaries of EEG-based emotion classification using consumer-grade wearable brain-computer interfacing devices and ensemble classifiers

    Emotion classification using features derived from electroencephalography (EEG) is currently one of the major research areas in big data. Although this area of research is not new, the current challenge is to move from medical-grade EEG acquisition devices to consumer-grade EEG devices. The overwhelming majority of reported studies that have achieved high success rates in such research use equipment that is beyond the reach of the everyday consumer. Consequently, EEG-based emotion classification applications, though highly promising and worthwhile to research, largely remain academic research rather than deployable solutions. In this study, we attempt to use consumer-grade EEG devices, commonly referred to as wearable EEG devices, that are very economical in cost but have a limited number of sensor electrodes as well as limited signal resolution. This greatly reduces the number and quality of available EEG signals that can be used as classification features. Additionally, we attempt to classify into 4 distinct classes as opposed to the more common 2- or 3-class emotion classification task. Moreover, we attempt to conduct inter-subject classification rather than just intra-subject classification, the former being much more challenging than the latter. Using a test cohort of 31 users with stimuli presented via an immersive virtual reality environment, we present results showing that classification accuracies could be pushed beyond 85% using ensemble classification methods in the form of Random Forest.
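    A brief sketch of the ensemble step, assuming the same kind of wearable-EEG feature matrix as in the earlier sketches; the data, cohort shape, and forest size are placeholders.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    X = rng.normal(size=(620, 20))            # placeholder features from a 31-user cohort
    y = rng.integers(0, 4, size=620)          # four emotion classes

    forest = RandomForestClassifier(n_estimators=300, random_state=0)
    print(cross_val_score(forest, X, y, cv=10).mean())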

    Four-class emotion classification in virtual reality using pupillometry

    Background: Emotion classification remains a challenging problem in affective computing. The large majority of emotion classification studies rely on electroencephalography (EEG) and/or electrocardiography (ECG) signals and only classify the emotions into two or three classes. Moreover, the stimuli used in most emotion classification studies are either music or visual stimuli presented through conventional displays such as computer display screens or television screens. This study reports on a novel approach to recognizing emotions using pupillometry alone, in the form of pupil diameter data, to classify emotions into four distinct classes according to Russell's Circumplex Model of Emotions, utilizing emotional stimuli that are presented in a virtual reality (VR) environment. The stimuli used in this experiment are 360° videos presented using a VR headset. Using an eye-tracker, pupil diameter is acquired as the sole classification feature. Three classifiers were used for the emotion classification: Support Vector Machine (SVM), k-Nearest Neighbor (KNN), and Random Forest (RF). Findings: SVM achieved the best performance for the four-class intra-subject classification task at an average of 57.05% accuracy, which is more than twice the accuracy of a random classifier. Although the accuracy can still be significantly improved, this study reports the first systematic study on the use of eye-tracking data alone, without any other supplementary sensor modalities, to perform human emotion classification, and demonstrates that even with the single feature of pupil diameter, emotions can be classified into four distinct classes to a certain level of accuracy. Moreover, the best performance for recognizing a particular class was 70.83%, achieved by the KNN classifier for Quadrant 3 emotions. Conclusion: This study presents the first systematic investigation on the use of pupillometry as the sole feature to classify emotions into four distinct classes using VR stimuli. The ability to conduct emotion classification using pupil data alone represents a promising new approach to affective computing, as new applications could be developed using readily available webcams on laptops and other mobile devices that are equipped with cameras, without the need for specialized and costly equipment such as EEG and/or ECG as the sensor modality.
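    A small sketch of comparing the three classifiers on pupil-diameter features and reading off per-quadrant performance; the feature matrix and the choice of per-window statistics are synthetic placeholders, not the study's data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    X = rng.normal(size=(500, 6))             # placeholder pupil-diameter statistics per window
    y = rng.integers(0, 4, size=500)          # Russell quadrants
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for clf in (SVC(kernel="rbf"), KNeighborsClassifier(), RandomForestClassifier()):
        cm = confusion_matrix(y_te, clf.fit(X_tr, y_tr).predict(X_te))
        print(type(clf).__name__, cm.diagonal() / cm.sum(axis=1))   # recall per quadrant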

    Impaired interferon-γ responses, increased interleukin-17 expression, and a tumor necrosis factor–α transcriptional program in invasive aspergillosis

    This article is available open access through the publisher's website. Copyright @ 2009 Oxford University Press. Background - Invasive aspergillosis (IA) is the most common cause of death associated with fungal infection in the developed world. Historically, susceptibility to IA has been associated with prolonged neutropenia; however, IA has now become a major problem in patients on calcineurin inhibitors and in allogeneic hematopoietic stem cell transplant patients following engraftment. These observations suggest that complex cellular mechanisms govern immunity to IA. Methods - To characterize the key early events that govern the outcome of infection with Aspergillus fumigatus, we performed a comparative immunochip microarray analysis of the pulmonary transcriptional response to IA between cyclophosphamide-treated mice and immunocompetent mice at 24 h after infection. Results - We demonstrate that death due to infection is associated with a failure to generate an incremental interferon-γ response, increased levels of interleukin-5 and interleukin-17a transcript, coordinated expression of a network of tumor necrosis factor–α-related genes, and increased levels of tumor necrosis factor–α. In contrast, clearance of infection is associated with increased expression of a number of genes encoding proteins involved in innate pathogen clearance, as well as apoptosis and control of inflammation. Conclusion - This first organ-level immune response transcriptional analysis for IA has enabled us to gain new insights into the mechanisms that govern fungal immunity in the lung. Funded by the BBSRC, the CGD Research Trust, and the MRC.

    Discharge Summary Hospital Course Summarisation of In Patient Electronic Health Record Text with Clinical Concept Guided Deep Pre-Trained Transformer Models

    Brief Hospital Course (BHC) summaries are succinct summaries of an entire hospital encounter, embedded within discharge summaries and written by senior clinicians responsible for the overall care of a patient. Methods to automatically produce summaries from inpatient documentation would be invaluable in reducing the manual burden on clinicians of summarising documents under the high time pressure of admitting and discharging patients. Automatically producing these summaries from the inpatient course is a complex, multi-document summarisation task, as the source notes are written from various perspectives (e.g. nursing, doctor, radiology) during the course of the hospitalisation. We evaluate a range of methods for BHC summarisation, demonstrating the performance of deep learning summarisation models across extractive and abstractive summarisation scenarios. We also test a novel ensemble extractive and abstractive summarisation model that incorporates a medical concept ontology (SNOMED) as a clinical guidance signal and show that it achieves superior performance on 2 real-world clinical data sets.
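    A loose sketch of one way an extract-then-abstract pipeline with a concept-based guidance signal could be wired together; the hard-coded term list stands in for SNOMED concept matching, the notes are invented, and the model choice is illustrative, not the authors' system. It assumes the Hugging Face transformers library and a downloadable model.

    from transformers import pipeline

    # Placeholder clinical concepts standing in for SNOMED concept matching.
    CONCEPTS = {"pneumonia", "sepsis", "antibiotics", "discharge"}

    def extract(notes):
        # Extractive stage: keep only sentences mentioning a clinical concept.
        kept = [sent for note in notes for sent in note.split(". ")
                if any(c in sent.lower() for c in CONCEPTS)]
        return ". ".join(kept)

    notes = ["Patient admitted with fever and cough. CXR consistent with pneumonia.",
             "Started empiric antibiotics. Vitals stable overnight.",
             "Afebrile on day 3. Plan for discharge home with oral antibiotics."]

    # Abstractive stage: a generic pre-trained summarisation model (illustrative choice).
    summariser = pipeline("summarization", model="t5-small")
    print(summariser(extract(notes), max_length=60, min_length=10)[0]["summary_text"])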

    Comparative analysis of electroencephalogram-based classification of user responses to statically vs. dynamically presented visual stimuli

    Emotion is an important part of being human and plays an important role in human communication. Nowadays, as the use of machines becomes more common, human-computer interaction (HCI) has become increasingly important, and a better understanding of the user can lead to machines that provide better assistance. The use of EEG to understand humans is widely studied for its benefits in several fields such as neuromarketing and HCI. In this study, we compare the use of 2 different stimuli (3D shapes with motion vs. 2D emotional images that are static) in attempting to classify positive versus negative feelings. A medical-grade 9-electrode Advanced Brain Monitoring (ABM) B-Alert X10 is used as the brain-computer interface (BCI) acquisition device to obtain the EEG signals. 4 subjects' brain signals are recorded while viewing the 2 types of stimuli. Feature extraction is then applied to the acquired EEG signals to obtain the alpha, beta, gamma, theta and delta rhythms as features using time-frequency analysis. Support vector machine (SVM) and k-nearest neighbors (KNN) classifiers are used to train and classify positive and negative feelings for both stimuli using different channels and rhythms. The average accuracy for the 3D motion shapes is better than that for the 2D static emotional images for both classifiers: 69.88% vs. 56.35% using SVM, and 65.31% vs. 55.45% using KNN, for the 3D motion shapes and the 2D emotional images respectively. This study shows that the parietal lobe is more informative in the classification of 3D motion shapes, while the Fz channel of the frontal lobe is more informative in the classification of 2D static emotional images.
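    A brief sketch of the SVM vs. KNN comparison on rhythm-power features; the feature matrix is a synthetic stand-in for the B-Alert X10 recordings and the class counts are placeholders.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 45))        # placeholder: 9 electrodes x 5 rhythms
    y = rng.integers(0, 2, size=200)      # positive vs. negative feeling

    for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=5)):
        print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())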

    Towards Computer-Generated Cue-Target Mnemonics for E-Learning

    A novel method to generate memory aids for general forms of knowledge is presented. Mnemonic phrases are constructed using constraints of phonetic similarity to learning material, grammar, semantics, and factual consistency. The method has been implemented in Python using the CMU Pronouncing Dictionary, the CYC AI knowledge base, and Kneser-Ney 5-gram probabilities built from the large-scale COCA text corpus. Initial tests have produced encouraging output
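    A toy sketch of the phonetic-similarity constraint only, scoring candidate words by phoneme overlap with a target term; it assumes NLTK's copy of the CMU Pronouncing Dictionary (nltk.download('cmudict')) and that each word appears in the dictionary, and it is not the paper's full generation method.

    from nltk.corpus import cmudict

    pron = cmudict.dict()

    def phonemes(word):
        # First listed pronunciation, with stress digits stripped.
        return [p.rstrip("012") for p in pron[word.lower()][0]]

    def similarity(a, b):
        # Crude overlap score: shared phonemes relative to the longer word.
        pa, pb = phonemes(a), phonemes(b)
        return len(set(pa) & set(pb)) / max(len(pa), len(pb))

    target = "mitosis"
    candidates = ["my", "toes", "mighty", "oasis"]
    print(sorted(candidates, key=lambda w: similarity(target, w), reverse=True))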